Unsupervised Galaxy Morphological Visual Representation with Deep Contrastive Learning


Abstract

Galaxy morphology reflects structural properties that contribute to the understanding of the formation and evolution of galaxies. Deep convolutional networks have proven to be very successful in learning hidden features that allow for unprecedented performance in morphological classification of galaxies. Such networks mostly follow the supervised learning paradigm, which requires sufficient labeled data for training. However, labeling a million galaxies is an expensive and complicated process, particularly for forthcoming survey projects. In this paper, we present an approach, based on contrastive learning, with the aim of learning galaxy morphological visual representation using only unlabeled data. Considering the low semantic information of contour-dominated galaxy images, the feature extraction layer of the proposed method incorporates a vision transformer network to provide rich semantic representation via the fusion of multi-hierarchy features. We train and test our method on three classification data sets from Galaxy Zoo 2 with SDSS-DR17, and four from DECaLS. The testing accuracy achieves 94.7%, 96.5%, and 89.9%, respectively. The cross-validation experiment demonstrates that the model possesses transfer and generalization ability when applied to new data sets. The code and pretrained models are publicly available and can be easily adapted to new surveys. https://github.com/kustcn/galaxy_contrastive
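Contrastive learning of the kind described above trains on pairs of augmented views of the same galaxy image, pulling each pair together in embedding space while pushing all other views apart. A common objective for this setting is the NT-Xent (normalized temperature-scaled cross entropy) loss popularized by SimCLR; the sketch below is illustrative only, not the paper's exact implementation, and the function names and temperature value are assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of N embedding pairs.

    z1[i] and z2[i] are embeddings of two augmented views of image i.
    Each view's positive is its counterpart; the other 2N - 2 views in
    the batch serve as negatives.
    """
    z = z1 + z2          # concatenate both views: 2N embeddings
    n = len(z1)
    total = 0.0
    for i in range(2 * n):
        pos = (i + n) % (2 * n)  # index of the positive counterpart
        # denominator: similarities to every other view in the batch
        denom = sum(math.exp(cosine(z[i], z[j]) / temperature)
                    for j in range(2 * n) if j != i)
        pos_sim = math.exp(cosine(z[i], z[pos]) / temperature)
        total += -math.log(pos_sim / denom)
    return total / (2 * n)
```

When the two views of each image map to identical embeddings, the loss is low; when pairs are mismatched, it rises, which is what drives the encoder toward augmentation-invariant representations without any labels.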


Similar Articles

Deep Unsupervised Domain Adaptation for Image Classification via Low Rank Representation Learning

Domain adaptation is a powerful technique when a large amount of labeled data with similar attributes is available in different domains. In real-world applications, there is a huge amount of data, but most of it is unlabeled. Domain adaptation is effective in image classification, where obtaining adequately labeled data is expensive and time-consuming. We propose a novel method named DALRRL, which consists of deep ...


Early Visual Concept Learning with Unsupervised Deep Learning

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressure...


Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adve...


Deep unsupervised learning of visual similarities

Exemplar learning of visual similarities in an unsupervised manner is a problem of paramount importance to Computer Vision. In this context, however, the recent breakthrough in deep learning could not yet unfold its full potential. With only a single positive sample, a great imbalance between one positive and many negatives, and unreliable relationships between most samples, training of Convolu...


Deep Trans-layer Unsupervised Networks for Representation Learning

Learning features from massive unlabelled data is a prevalent topic for high-level tasks in many machine learning applications. The recent great improvements on benchmark data sets, achieved by increasingly complex unsupervised learning methods and deep learning models with many parameters, usually require tedious tricks and much expertise to tune. However, filters learned by these c...



Journal

Journal: Publications of the Astronomical Society of the Pacific

Year: 2022

ISSN: 0004-6280, 1538-3873

DOI: https://doi.org/10.1088/1538-3873/aca04e